

Search for: All records

Creators/Authors contains: "Pundla, Sai Abhideep"


  1. Abstract Data centers are complex environments that undergo constant change due to fluctuations in IT load, commissioning and decommissioning of IT equipment, heterogeneous rack architectures, and varying environmental conditions. These dynamic factors often make it challenging to provision cooling effectively, resulting in higher energy consumption. To address this issue, it is crucial to account for data center thermal heterogeneity when allocating workloads and controlling cooling, as it affects operational efficiency. Computational Fluid Dynamics (CFD) models are used to simulate data center heterogeneity and analyze the impact of two different cooling mechanisms on operational efficiency. This research compares facility-water-based cooling for Rear Door Heat Exchanger (RDHx) and conventional Computer Room Air Handler (CRAH) systems in two different data center configurations. Efficiency is measured in terms of the ΔT across the facility water: a higher ΔT enables more efficient chiller operation. The actual chiller efficiency is not calculated, as it would depend on the local ambient conditions in which the chiller operates. The first data center model represents a typical enterprise-level configuration in which all servers and racks have homogeneous IT power. The second model represents a colocation facility in which server/rack power configurations are randomly distributed. These models predict temperature variations at different locations based on IT workload and cooling parameters. Traditionally, CRAH configurations are selected based on total IT power consumption, rack power density, and the required cooling capacity for the entire data center space. RDHx, on the other hand, can be scaled based on individual rack power density, offering localized cooling advantages. Multiple workload distribution scenarios were simulated for both the CRAH- and RDHx-based data center models. The results showed that RDHx provides a uniform thermal profile across the data center, irrespective of server/rack power density or workload distribution, which reduces the risk of over- or under-provisioning racks. Operational efficiency is compared in terms of the difference between the supply and return temperatures of the facility water for the CRAH and RDHx units, based on spatial heat dissipation and workload distribution. RDHx demonstrated excellent cooling capability while maintaining a higher ΔT, resulting in reduced cooling energy consumption, operational carbon footprint, and water usage.
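The flow-rate and ΔT trade-off referenced in this abstract can be made concrete with the facility-water heat balance Q = ṁ·c_p·ΔT. The sketch below is illustrative only: the IT load, the ΔT values attributed to the CRAH and RDHx loops, and the water properties are assumed numbers, not results from the study.

```python
# Minimal sketch: facility-water heat balance Q = m_dot * c_p * dT.
# All numbers below are illustrative assumptions, not values from the study.

CP_WATER = 4186.0  # specific heat of water, J/(kg*K)

def required_flow_kg_s(it_load_kw: float, delta_t_k: float) -> float:
    """Facility-water mass flow needed to carry a given IT heat load at a given dT."""
    return it_load_kw * 1000.0 / (CP_WATER * delta_t_k)

it_load_kw = 500.0  # assumed IT heat load rejected to the facility water, kW

# Hypothetical comparison: a CRAH loop at a modest dT versus an RDHx loop
# sustaining a higher return-minus-supply dT for the same load.
for label, delta_t in (("CRAH loop, dT = 6 K", 6.0), ("RDHx loop, dT = 12 K", 12.0)):
    flow = required_flow_kg_s(it_load_kw, delta_t)
    print(f"{label}: {flow:.1f} kg/s of facility water for {it_load_kw:.0f} kW of IT load")
```

For the same heat load, doubling the return-minus-supply ΔT halves the required facility-water flow, which is where the pumping-energy and chiller-side benefits described in the abstract originate.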
  2. Abstract Data centers are witnessing an unprecedented increase in processing and data storage, resulting in an exponential increase in server power density and heat generation. Data center operators are looking for green, energy-efficient cooling technologies with low power consumption and high thermal performance. Typical air-cooled data centers must maintain safe operating temperatures while cooling high-power server components such as CPUs and GPUs. This makes air cooling inefficient, in terms of both heat transfer and energy consumption, for applications such as high-performance computing, AI, cryptocurrency, and cloud computing, forcing data centers to switch to liquid cooling. Air cooling also carries a higher OPEX to account for server fan power. Liquid Immersion Cooling (LIC) is an affordable and sustainable cooling technology that addresses many of the challenges of air cooling. LIC is becoming a viable and reliable cooling technology for many high-power applications, leading to reduced maintenance costs, lower water utilization, and lower power consumption. In terms of environmental impact, single-phase immersion cooling outperforms two-phase immersion cooling. There are two single-phase immersion cooling methods: forced and natural convection. Forced convection provides a higher overall heat transfer coefficient, which makes it advantageous for cooling high-powered electronic devices, whereas natural convection allows the cooling loop to be simplified, including elimination of the pump. Forced convection retains some advantages, however, especially at low flow velocities where the pumping power is relatively negligible. This study compares a baseline forced-convection single-phase immersion-cooled server run at three different inlet temperatures against four natural convection configurations that use different server powers and cold plates. Because natural convection relies on the buoyancy of the heated fluid to drive the flow, cold plates are designed to remove heat from the server; for performance comparison, a natural convection model with cold plates is designed in which water is the fluid flowing through the cold plate. A high-density server with a total heat load of 3.76 kW is modeled in Ansys Icepak. The server comprises two CPUs and eight GPUs, each chip with its own thermal design power (TDP). For both heat transfer conditions, the fluid used in the investigation is EC-110, operated at inlet temperatures of 30°C, 40°C, and 50°C. The coolant flow rate in forced convection is 5 GPM, whereas the flow rate in the natural convection cold plates is varied. CFD simulations are used to reduce chip case temperatures under both forced and natural convection. Pressure drop and pumping power are also evaluated for the server over the given inlet temperature range, and the best operating parameters are established. The numerical study shows that forced convection systems can maintain much lower component temperatures than natural convection systems, even when the natural convection systems are modeled with enhanced cooling characteristics.
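To put the forced- versus natural-convection comparison in rough numerical terms, the sketch below estimates hydraulic pumping power as P = ΔP·V̇/η and chip case temperature as T_case = T_inlet + TDP·R_th. Only the 5 GPM flow rate, the 3.76 kW server, and the 30/40/50°C inlet temperatures come from the abstract; the pressure drop, pump efficiency, per-chip power, and thermal resistances are assumed for illustration.

```python
# Minimal sketch: pumping power and chip case-temperature estimates for a
# single-phase immersion-cooled server. The 5 GPM flow rate and the
# 30/40/50 degC inlet temperatures come from the abstract; the pressure
# drop, pump efficiency, per-chip power, and thermal resistances are assumptions.

GPM_TO_M3_S = 6.309e-5  # 1 US gallon per minute in m^3/s

def pumping_power_w(dp_pa: float, flow_gpm: float, pump_eff: float = 0.7) -> float:
    """Hydraulic pumping power P = dP * V_dot / eta."""
    return dp_pa * flow_gpm * GPM_TO_M3_S / pump_eff

def case_temp_c(t_inlet_c: float, chip_power_w: float, r_th_c_per_w: float) -> float:
    """Chip case temperature from a lumped case-to-coolant thermal resistance."""
    return t_inlet_c + chip_power_w * r_th_c_per_w

dp_forced_pa = 15_000.0  # assumed server pressure drop at 5 GPM, Pa
print(f"Forced-convection pump power: {pumping_power_w(dp_forced_pa, 5.0):.1f} W")

# Assumed effective thermal resistances (degC/W) and a 300 W GPU, for illustration only.
r_forced, r_natural = 0.04, 0.09
for t_in in (30.0, 40.0, 50.0):  # inlet temperatures from the study
    print(f"T_in={t_in:.0f} C  GPU(300 W): forced {case_temp_c(t_in, 300, r_forced):.1f} C, "
          f"natural {case_temp_c(t_in, 300, r_natural):.1f} C")
```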
  3. Abstract Data centers have started to adopt immersion cooling for more than just mainframes and supercomputers. Because air cooling cannot keep up with recent highly configured servers with higher Thermal Design Power, the thermal requirements of machine learning, AI, blockchain, 5G, edge computing, and high-frequency trading have driven a larger deployment of immersion cooling. Dielectric fluids are far more efficient at transferring heat than air. Immersion cooling promises to address many of the challenges that come with air cooling systems, especially as computing densities increase. Immersion-cooled data centers are more expandable, quicker to install, more energy-efficient, able to cool almost all server components, more cost-effective for enterprises, and more robust overall. By eliminating active cooling components such as fans, immersion cooling enables a significantly higher density of computing capability. When using immersion cooling for server hardware originally designed for air cooling, immersion-optimized heat sinks should be used, since the heat sink is an important component for server cooling efficacy. This research optimizes heat sinks for immersion-cooled servers to achieve the minimum possible case temperature, using multi-objective, multi-design-variable optimization with pumping power as the constraint. A 3.76 kW high-density server, consisting of 2 CPUs and 8 GPUs with heat sink assemblies operating at their Thermal Design Power, along with 32 Dual In-line Memory Modules, is modeled in Ansys Icepak. The optimization is conducted for aluminum heat sinks with pressure drop and thermal resistance as the objective functions to be minimized, and fin count, fin thickness, and heat sink height as the design variables for all CPU and GPU heat sink assemblies. Optimization for the CPU and GPU heat sinks is carried out separately, and the optimized heat sinks are then tested in the full server setup in Ansys Icepak. The dielectric fluid for this numerical study is EC-110, and the cooling is carried out using forced convection. A Design of Experiments (DOE) is created from the input ranges of the design variables using a full-factorial approach to generate multiple design points. The effect of the design variables on the objective functions is analyzed to establish which parameters have the greatest impact on the performance of the optimized heat sink. The optimization study is performed in Ansys optiSLang, using the Adaptive Metamodel of Optimal Prognosis (AMOP) as the sampling method for design exploration. The results provide total-effect values for the heat sink geometric parameters, which are used together with 2D and 3D response surface plots to choose the best design point for each heat sink assembly.
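To illustrate the kind of full-factorial DOE and two-objective trade-off described in this abstract, the sketch below enumerates design points over fin count, fin thickness, and heat sink height and keeps the non-dominated (Pareto) set for pressure drop and thermal resistance. The variable levels and the surrogate objective functions are placeholders; in the study, each design point is evaluated with CFD in Ansys Icepak and sampled via AMOP in Ansys optiSLang.

```python
# Minimal sketch: full-factorial DOE over fin count, fin thickness, and heat
# sink height, followed by a non-dominated (Pareto) filter on the two
# objectives named in the abstract: pressure drop and thermal resistance.
# Levels and surrogate objective functions are placeholders, not study data.
from itertools import product

fin_counts   = [20, 30, 40]     # assumed levels
fin_thick_mm = [0.5, 1.0, 1.5]  # assumed levels
heights_mm   = [20, 30, 40]     # assumed levels

def surrogate_objectives(n_fins, t_mm, h_mm):
    """Toy stand-ins for CFD results: more/taller fins lower thermal
    resistance but raise pressure drop (denser channels)."""
    r_th = 1.0 / (0.002 * n_fins * h_mm)  # K/W, illustrative
    dp   = 5.0 * n_fins * t_mm / h_mm     # Pa, illustrative
    return dp, r_th

designs = [(x, surrogate_objectives(*x))
           for x in product(fin_counts, fin_thick_mm, heights_mm)]

def dominated(a, b):
    """True if objective vector b is no worse than a in both and better in one."""
    return all(bj <= aj for aj, bj in zip(a, b)) and any(bj < aj for aj, bj in zip(a, b))

pareto = [(x, obj) for x, obj in designs
          if not any(dominated(obj, other) for _, other in designs if other != obj)]

for (n, t, h), (dp, r_th) in sorted(pareto):
    print(f"fins={n:2d} t={t:.1f} mm h={h:2.0f} mm -> dP={dp:6.1f} Pa, R_th={r_th:.3f} K/W")
```

In the actual workflow the surrogate function would be replaced by CFD (or AMOP metamodel) evaluations, but the DOE enumeration and Pareto filtering logic are the same in structure.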